At 16:21 +0300 on 03/05/1999, Frank Morton wrote:
> I have tried many combinations of things to speed this
> up as you all have suggested. I have had no success
> using "copy" at all because of problems with quotes
> and other punctuation in the data.
I must tell you, this doesn't sound reasonable to me. If you already have a
program that writes out the fields, it's usually very easy to have it escape
their contents: add a backslash before each tab, newline and backslash in
every field. Quotes and other punctuation need no special treatment in
COPY's format.
If what you need is to dump-reload, then this is exactly what the dump
program will do for you.
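To illustrate the escaping described above, here is a minimal sketch (the function name is my own, not from any library). In COPY's text format only backslash, tab and newline are special; quotes are ordinary characters, which is why they should not cause trouble:

```python
def escape_copy_field(value):
    # Escape the backslash first, then tab and newline, so that
    # COPY reads the field back exactly as it was written.
    return (value.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n"))

# Fields joined by literal tabs form one COPY input line.
row = ["O'Reilly", 'He said "hi"', "two\nlines"]
line = "\t".join(escape_copy_field(f) for f in row)
```

Note that the quotes pass through untouched; only the embedded newline in the third field is rewritten.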
> This last attempt, I bracket each insert statement with
> "begin;" and "end;".
Not necessary. Each insert has an implicit begin-end around it, so wrapping
them individually is redundant. If you want to change that behaviour, group
several inserts together inside a single begin-end.
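The grouping can be sketched like this (a hypothetical helper of my own, not part of any tool): instead of one transaction per insert, emit one BEGIN/END pair around every batch of statements.

```python
def batch_inserts(statements, batch_size=1000):
    # Wrap each run of `batch_size` INSERT statements in a single
    # transaction, rather than committing after every statement.
    script = []
    for i in range(0, len(statements), batch_size):
        script.append("BEGIN;")
        script.extend(statements[i:i + batch_size])
        script.append("END;")
    return "\n".join(script)
```

Feeding the resulting script to psql then commits once per thousand rows instead of once per row.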
> What I am seeing this time around is in the beginning,
> the inserts were reasonable in speed. Say 6 or 7
> per second. But now that it is up to record 100,000 or
> so (3 DAYS later) the time between inserts is about
> 10 SECONDS. As progress is made, the inserts
> continue to get slower and slower. So at the current
> rate, I have another 138 hours before completion!
Perhaps you need to break the connection. That is, every 1000 records,
disconnect from the backend and reconnect. Running VACUUM every so often
may also help somewhat.
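As a sketch of that loading loop, assuming a caller-supplied `connect` function (hypothetical; stands in for whatever interface library you use) that returns a connection with `execute()` and `close()`:

```python
def load_in_chunks(statements, connect, chunk_size=1000):
    # Open a fresh backend connection for every chunk so that
    # per-session state cannot accumulate, and vacuum between
    # chunks as suggested above.
    for i in range(0, len(statements), chunk_size):
        conn = connect()          # new backend process per chunk
        for stmt in statements[i:i + chunk_size]:
            conn.execute(stmt)
        conn.execute("VACUUM;")   # occasional vacuum
        conn.close()
```

The `connect` parameter keeps the sketch independent of any particular client library.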
Herouth
--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma